Traditional Chinese Medicine


TCM-5CEval: Extended Deep Evaluation Benchmark for LLM's Comprehensive Clinical Research Competence in Traditional Chinese Medicine

Huang, Tianai, Chen, Jiayuan, Lu, Lu, Chen, Pengcheng, Li, Tianbin, Han, Bing, Tang, Wenchao, Xu, Jie, Li, Ming

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated exceptional capabilities in general domains, yet their application in highly specialized and culturally rich fields like Traditional Chinese Medicine (TCM) requires rigorous and nuanced evaluation. Building upon prior foundational work such as TCM-3CEval, which highlighted systemic knowledge gaps and the importance of cultural-contextual alignment, we introduce TCM-5CEval, a more granular and comprehensive benchmark. TCM-5CEval is designed to assess LLMs across five critical dimensions: (1) Core Knowledge (TCM-Exam), (2) Classical Literacy (TCM-LitQA), (3) Clinical Decision-making (TCM-MRCD), (4) Chinese Materia Medica (TCM-CMM), and (5) Clinical Non-pharmacological Therapy (TCM-ClinNPT). We conducted a thorough evaluation of fifteen prominent LLMs, revealing significant performance disparities and identifying top-performing models like deepseek_r1 and gemini_2_5_pro. Our findings show that while models exhibit proficiency in recalling foundational knowledge, they struggle with the interpretative complexities of classical texts. Critically, permutation-based consistency testing reveals widespread fragilities in model inference. All evaluated models, including the highest-scoring ones, displayed substantial performance degradation when faced with varied question option ordering, indicating a pervasive sensitivity to positional bias and a lack of robust understanding. TCM-5CEval not only provides a more detailed diagnostic tool for LLM capabilities in TCM but also exposes fundamental weaknesses in their reasoning stability. To promote further research and standardized comparison, TCM-5CEval has been uploaded to the Medbench platform, joining its predecessor in the "In-depth Challenge for Comprehensive TCM Abilities" special track.
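The permutation-based consistency testing described above can be sketched in a few lines: shuffle the answer options of a multiple-choice item and check how often the model's selected answer text stays the same. The model stub below is a deliberately biased stand-in for a real LLM call (it always picks whichever option is listed first), which makes the positional-bias failure mode concrete; everything here is illustrative, not the benchmark's actual harness.

```python
import itertools

# Hypothetical model stub: a real evaluation would call an LLM API here.
# This stub always "answers" whichever option appears first, mimicking
# the positional bias the benchmark exposes.
def biased_model(question: str, options: list[str]) -> str:
    return options[0]

def consistency_rate(question: str, options: list[str], model) -> float:
    """Fraction of option orderings for which the model returns the same
    answer text as it does for the canonical ordering."""
    reference = model(question, options)
    orderings = list(itertools.permutations(options))
    agree = sum(1 for perm in orderings if model(question, list(perm)) == reference)
    return agree / len(orderings)

options = ["Ginseng", "Astragalus", "Licorice", "Ephedra"]
rate = consistency_rate("Which herb tonifies qi?", options, biased_model)
print(f"consistency under permutation: {rate:.2f}")  # → consistency under permutation: 0.25
```

A fully position-biased model agrees with itself only on the 6 of 24 orderings that keep its original pick in first place; a robust model would score 1.0 regardless of ordering.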


TianHui: A Domain-Specific Large Language Model for Diverse Traditional Chinese Medicine Scenarios

Yin, Ji, He, Menglan, Zhang, Yujie, Zhang, Linshuai, Ma, Tingting, Tian, Ce, Wu, Jie, Xu, Lin, Jiang, Tao

arXiv.org Artificial Intelligence

Background: Currently, domain-specific large language models (LLMs) in traditional Chinese medicine (TCM) are primarily designed for clinical practice and medical education, yet they demonstrate substantial limitations when applied to research contexts owing to inadequate adaptability to complex tasks, thereby constraining their scientific utility. Moreover, the absence of comprehensive evaluation datasets and computational resource constraints hinder rigorous performance assessments and prevent extensive comparative or ablation experiments, ultimately resulting in suboptimal model performance and weakened persuasiveness. Objective: To address these challenges, this study proposed a method for constructing a specialized LLM for the TCM domain based on contextual data integration and domain knowledge fusion, and successfully developed a privatized LLM for the TCM profession, TianHui. Methods: First, we acquired a large amount of TCM data, including academic literature, published books, online public data, and other supplementary materials, and pre-processed them to generate a 0.97 GB unsupervised dataset and 611,312 QA pairs. Then, we adopted a phased training strategy (Pre-Training (PT) followed by Supervised Fine-Tuning (SFT)) and integrated three key technologies: Quantized Low-Rank Adaptation (QLoRA) parameter-efficient fine-tuning, DeepSpeed Stage 2 distributed training optimization, and Flash Attention 2 accelerated computation, to achieve optimal allocation of computational resources while guaranteeing training stability. Finally, we evaluated TianHui on 12 benchmark test datasets and conducted extensive comparison and ablation experiments. Results: TianHui demonstrated excellent performance in 12 TCM-related application scenarios. It ranked in the top three on every evaluation index in six test datasets (APQ, TCMCD, HFR, HCCA, DHPE, and TLAW), and achieved optimal performance on all indicators of the other six (TCMEE, APR, GCPMI, TCMKQA, TCMRC, and ADTG).
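The parameter efficiency that makes QLoRA attractive here comes from training two small low-rank factors per adapted weight matrix instead of the full matrix. A back-of-the-envelope sketch, using purely illustrative numbers (hidden size, layer count, and rank are assumptions, not TianHui's actual configuration):

```python
# Why QLoRA is parameter-efficient: a LoRA adapter on one (d x d)
# weight adds factors A (d x r) and B (r x d), i.e. 2*d*r trainable
# parameters, while the d*d base weight stays frozen (and quantized).
hidden = 4096          # hypothetical hidden dimension
n_layers = 32          # hypothetical transformer layer count
rank = 16              # hypothetical LoRA rank

full_params_per_matrix = hidden * hidden        # frozen base weight
lora_params_per_matrix = 2 * hidden * rank      # trainable adapter

# Suppose adapters attach to 4 projection matrices per layer.
adapted_matrices = 4 * n_layers
trainable = adapted_matrices * lora_params_per_matrix
frozen = adapted_matrices * full_params_per_matrix

print(f"trainable fraction: {trainable / (trainable + frozen):.4%}")
```

Under these assumptions, well under 1% of the adapted weights' parameters are trainable, which is what lets PT and SFT phases fit on modest hardware alongside DeepSpeed Stage 2 sharding.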


Leveraging Group Relative Policy Optimization to Advance Large Language Models in Traditional Chinese Medicine

Xie, Jiacheng, Zeng, Shuai, Yu, Yang, Tang, Xiaoting, An, Guanghui, Xu, Dong

arXiv.org Artificial Intelligence

Traditional Chinese Medicine (TCM) presents a rich and structurally unique knowledge system that challenges conventional applications of large language models (LLMs). Although previous TCM-specific LLMs have shown progress through supervised fine-tuning, they often face limitations in alignment, data quality, and evaluation consistency. In this study, we introduce Ladder-base, the first TCM-focused LLM trained with Group Relative Policy Optimization (GRPO), a reinforcement learning method that improves reasoning and factual consistency by optimizing response selection based on intra-group comparisons. Ladder-base is built upon the Qwen2.5-7B-Instruct foundation model and trained exclusively on the textual subset of the TCM-Ladder benchmark, using 80 percent of the data for training and the remaining 20 percent split evenly between validation and test sets. Through standardized evaluation, Ladder-base demonstrates superior performance across multiple reasoning metrics when compared to both state-of-the-art general-purpose LLMs such as GPT-4, Gemini 2.5, Claude 3, and Qwen3 and domain-specific TCM models including BenTsao, HuatuoGPT2, and Zhongjing. These findings suggest that GRPO provides an effective and efficient strategy for aligning LLMs with expert-level reasoning in traditional medical domains and supports the development of trustworthy and clinically grounded TCM artificial intelligence systems.
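The "intra-group comparison" at the heart of GRPO can be shown directly: for a group of sampled responses to the same prompt, each response's advantage is its reward normalized against the group's own mean and standard deviation, with no separate value network. The reward values below are invented for illustration; a real run would score sampled answers with a reward model or rule-based checker before the clipped policy-gradient update.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO's core normalization: advantage_i = (r_i - mean) / std,
    computed within one sampling group. The full algorithm then uses
    these advantages in a PPO-style clipped policy update."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mu) / sigma for r in rewards]

# Hypothetical rewards for 4 candidate answers to one TCM question
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advs])  # → [1.414, -1.414, 0.0, 0.0]
```

Responses at the group mean get zero advantage, so the policy is pushed toward answers that beat their own siblings rather than toward an absolute reward scale.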


BenCao: An Instruction-Tuned Large Language Model for Traditional Chinese Medicine

Xie, Jiacheng, Yu, Yang, Chen, Yibo, Zhang, Hanyao, Zhao, Lening, He, Jiaxuan, Jiang, Lei, Tang, Xiaoting, An, Guanghui, Xu, Dong

arXiv.org Artificial Intelligence

Traditional Chinese Medicine (TCM), with a history spanning over two millennia, plays a role in global healthcare. However, applying large language models (LLMs) to TCM remains challenging due to its reliance on holistic reasoning, implicit logic, and multimodal diagnostic cues. Existing TCM-domain LLMs have made progress in text-based understanding but lack multimodal integration, interpretability, and clinical applicability. To address these limitations, we developed BenCao, a ChatGPT-based multimodal assistant for TCM, integrating structured knowledge bases, diagnostic data, and expert feedback refinement. BenCao was trained through natural language instruction tuning rather than parameter retraining, aligning with expert-level reasoning and ethical norms specific to TCM. The system incorporates a comprehensive knowledge base of over 1,000 classical and modern texts, a scenario-based instruction framework for diverse interactions, a chain-of-thought simulation mechanism for interpretable reasoning, and a feedback refinement process involving licensed TCM practitioners. BenCao connects to external APIs for tongue-image classification and multimodal database retrieval, enabling dynamic access to diagnostic resources. In evaluations across single-choice question benchmarks and multimodal classification tasks, BenCao achieved superior accuracy to general-domain and TCM-domain models, particularly in diagnostics, herb recognition, and constitution classification. The model was deployed as an interactive application on the OpenAI GPTs Store, accessed by nearly 1,000 users globally as of October 2025. This study demonstrates the feasibility of developing a TCM-domain LLM through natural language-based instruction tuning and multimodal integration, offering a practical framework for aligning generative AI with traditional medical reasoning and a scalable pathway for real-world deployment.


ChiMed 2.0: Advancing Chinese Medical Dataset in Facilitating Large Language Modeling

Tian, Yuanhe, Liu, Junjie, Kou, Zhizhou, Li, Yuxiang, Song, Yan

arXiv.org Artificial Intelligence

Building high-quality data resources is crucial for advancing artificial intelligence research and applications in specific domains, particularly in the Chinese medical domain. Existing Chinese medical datasets are limited in size and narrow in domain coverage, falling short of the diverse corpora required for effective pre-training. Moreover, most datasets are designed solely for LLM fine-tuning and do not support pre-training and reinforcement learning from human feedback (RLHF). In this paper, we propose a Chinese medical dataset named ChiMed 2.0, which extends our previous work ChiMed, and covers data collected from Chinese medical online platforms and generated by LLMs. ChiMed 2.0 contains 204.4M Chinese characters covering both traditional Chinese medicine classics and modern general medical data, where there are 164.8K documents for pre-training, 351.6K question-answering pairs for supervised fine-tuning (SFT), and 41.7K preference data tuples for RLHF. To validate the effectiveness of our approach for training a Chinese medical LLM, we conduct further pre-training, SFT, and RLHF experiments on representative general domain LLMs and evaluate their performance on medical benchmark datasets. The results show performance gains across different model scales, validating the dataset's effectiveness and applicability.
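The three training stages the dataset serves (pre-training, SFT, RLHF) imply three distinct record shapes: raw documents, question-answer pairs, and preference tuples. A minimal sketch of those shapes, with field names that are illustrative assumptions rather than ChiMed 2.0's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PretrainDoc:          # continued pre-training: raw text documents
    text: str

@dataclass
class SFTPair:              # supervised fine-tuning: QA pairs
    question: str
    answer: str

@dataclass
class PreferenceTuple:      # RLHF: one prompt, a preferred and a dispreferred response
    prompt: str
    chosen: str
    rejected: str

ex = PreferenceTuple(
    prompt="What does 'qi deficiency' indicate?",
    chosen="A pattern marked by fatigue, shortness of breath, and a weak pulse.",
    rejected="An unrelated or factually wrong answer.",
)
print(type(ex).__name__)
```

Keeping the three formats separate is what lets one corpus feed all three stages: documents stream into the language-modeling objective, pairs into instruction loss, and tuples into a reward/preference objective.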


OpenTCM: A GraphRAG-Empowered LLM-based System for Traditional Chinese Medicine Knowledge Retrieval and Diagnosis

He, Jinglin, Guo, Yunqi, Lam, Lai Kwan, Leung, Waikei, He, Lixing, Jiang, Yuanan, Wang, Chi Chiu, Xing, Guoliang, Chen, Hongkai

arXiv.org Artificial Intelligence

Traditional Chinese Medicine (TCM) represents a rich repository of ancient medical knowledge that continues to play an important role in modern healthcare. Due to the complexity and breadth of the TCM literature, the integration of AI technologies is critical for its modernization and broader accessibility. However, this integration poses considerable challenges, including the interpretation of obscure classical Chinese texts and the modeling of intricate semantic relationships among TCM concepts. In this paper, we develop OpenTCM, an LLM-based system that combines a domain-specific TCM knowledge graph and Graph-based Retrieval-Augmented Generation (GraphRAG). First, we extract more than 3.73 million classical Chinese characters from 68 gynecological books in the Chinese Medical Classics Database, with the help of TCM and gynecology experts. Second, we construct a comprehensive multi-relational knowledge graph comprising more than 48,000 entities and 152,000 interrelationships, using customized prompts and Chinese-oriented LLMs such as DeepSeek and Kimi to ensure high-fidelity semantic understanding. Last, we empower OpenTCM with GraphRAG, enabling high-fidelity ingredient knowledge retrieval and diagnostic question-answering without model fine-tuning. Experimental evaluations demonstrate that OpenTCM achieves mean expert scores (MES) of 4.378 in ingredient information retrieval and 4.045 in diagnostic question-answering tasks, outperforming state-of-the-art solutions in real-world TCM use cases.
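The GraphRAG retrieval step can be illustrated as a bounded neighborhood walk over the knowledge graph: match query entities, collect triples within a few hops, and serialize them into the LLM prompt instead of fine-tuning the model. The toy graph and relation names below are invented for illustration; the real OpenTCM graph holds over 48,000 entities and 152,000 interrelationships.

```python
# Tiny stand-in for a TCM knowledge graph: entity -> [(relation, object)]
graph = {
    "Danggui": [("treats", "blood deficiency"), ("part_of", "Siwu Decoction")],
    "Siwu Decoction": [("contains", "Danggui"), ("treats", "blood deficiency")],
    "blood deficiency": [("symptom", "pale complexion")],
}

def retrieve_subgraph(query_entities: list[str], hops: int = 1):
    """Collect triples within `hops` steps of the query entities; in a
    GraphRAG setup these are serialized into the generation prompt."""
    frontier, triples, seen = set(query_entities), [], set()
    for _ in range(hops):
        next_frontier = set()
        for ent in frontier:
            for rel, obj in graph.get(ent, []):
                if (ent, rel, obj) not in seen:
                    seen.add((ent, rel, obj))
                    triples.append((ent, rel, obj))
                    next_frontier.add(obj)
        frontier = next_frontier
    return triples

for t in retrieve_subgraph(["Danggui"]):
    print(t)
```

Because the evidence is fetched at query time, the system can answer ingredient and diagnostic questions with up-to-date graph content and cite exact triples, without touching model weights.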


TCM-3CEval: A Triaxial Benchmark for Assessing Responses from Large Language Models in Traditional Chinese Medicine

Huang, Tianai, Lu, Lu, Chen, Jiayuan, Liu, Lihao, He, Junjun, Zhao, Yuping, Tang, Wenchao, Xu, Jie

arXiv.org Artificial Intelligence

Large language models (LLMs) excel in various NLP tasks and modern medicine, but their evaluation in traditional Chinese medicine (TCM) is underexplored. To address this, we introduce TCM3CEval, a benchmark assessing LLMs in TCM across three dimensions: core knowledge mastery, classical text understanding, and clinical decision-making. We evaluate diverse models, including international (e.g., GPT-4o), Chinese (e.g., InternLM), and medical-specific (e.g., PLUSE). Results show a performance hierarchy: all models have limitations in specialized subdomains like Meridian & Acupoint theory and Various TCM Schools, revealing gaps between current capabilities and clinical needs. Models with Chinese linguistic and cultural priors perform better in classical text interpretation and clinical reasoning. TCM-3CEval sets a standard for AI evaluation in TCM, offering insights for optimizing LLMs in culturally grounded medical domains. The benchmark is available on Medbench's TCM track, aiming to assess LLMs' TCM capabilities in basic knowledge, classic texts, and clinical decision-making through multidimensional questions and real cases.


From Metaphor to Mechanism: How LLMs Decode Traditional Chinese Medicine Symbolic Language for Modern Clinical Relevance

Tang, Jiacheng, Wu, Nankai, Gao, Fan, Dai, Chengxiao, Zhao, Mengyao, Zhao, Xinjie

arXiv.org Artificial Intelligence

Metaphorical expressions are abundant in Traditional Chinese Medicine (TCM), conveying complex disease mechanisms and holistic health concepts through culturally rich and often abstract terminology. Bridging these metaphors to anatomically driven Western medical (WM) concepts poses significant challenges for both automated language processing and real-world clinical practice. To address this gap, we propose a novel multi-agent and chain-of-thought (CoT) framework designed to interpret TCM metaphors accurately and map them to WM pathophysiology. Specifically, our approach combines domain-specialized agents (TCM Expert, WM Expert) with a Coordinator Agent, leveraging stepwise chain-of-thought prompts to ensure transparent reasoning and conflict resolution. We detail a methodology for building a metaphor-rich TCM dataset, discuss strategies for effectively integrating multi-agent collaboration and CoT reasoning, and articulate the theoretical underpinnings that guide metaphor interpretation across distinct medical paradigms. We present a comprehensive system design and highlight both the potential benefits and limitations of our approach, while leaving placeholders for future experimental validation. Our work aims to support clinical decision-making, cross-system educational initiatives, and integrated healthcare research, ultimately offering a robust scaffold for reconciling TCM's symbolic language with the mechanistic focus of Western medicine.
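The coordinator-plus-experts flow can be sketched as a simple pipeline: a TCM expert interprets the metaphor, a WM expert maps that interpretation to candidate pathophysiology, and a coordinator chains the steps and records the intermediate reasoning. The expert functions below are hard-coded lookup stand-ins for LLM calls, and the example mapping is illustrative, not a clinical claim.

```python
# Stand-ins for LLM-backed expert agents (a real system would prompt
# separate models or personas with stepwise CoT instructions).
def tcm_expert(term: str) -> str:
    readings = {"liver fire rising": "excess heat pattern ascending to the head and eyes"}
    return readings.get(term, "unknown metaphor")

def wm_expert(tcm_reading: str) -> str:
    mappings = {
        "excess heat pattern ascending to the head and eyes":
            "possible correlates: hypertension, tension headache, conjunctival irritation",
    }
    return mappings.get(tcm_reading, "no mapping")

def coordinator(term: str) -> dict:
    """Chain the two experts and keep each intermediate step, a stand-in
    for the transparent CoT trace the framework emphasizes."""
    step1 = tcm_expert(term)
    step2 = wm_expert(step1)
    return {"metaphor": term, "tcm_interpretation": step1, "wm_mapping": step2}

print(coordinator("liver fire rising"))
```

Keeping the trace as structured steps (rather than one opaque answer) is what enables the conflict-resolution role: the coordinator can compare the two experts' outputs and flag disagreements for review.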


Graph Neural Networks for Quantifying Compatibility Mechanisms in Traditional Chinese Medicine

Zeng, Jingqi, Jia, Xiaobin

arXiv.org Artificial Intelligence

Through the rational compatibility of herbal medicines, TCM achieves synergistic effects and toxicity reduction (1). This core principle has demonstrated significant advantages in treating complex diseases. For example, Lianhua Qingwen Capsules have shown remarkable efficacy in alleviating symptoms and reducing hospitalization time for COVID-19 patients (2, 3). Additionally, PHY906, developed by Yale University based on the traditional Huangqin Decoction, has improved colorectal cancer treatment outcomes as a chemotherapy adjuvant (4). In recent years, the rapid development of artificial intelligence (AI) has introduced novel opportunities for investigating the complex mechanisms underlying TCM (5, 6). AI's exceptional data processing capabilities, particularly in multi-dimensional data analysis and complex relationship modeling, are transforming traditional medicine from experience-driven to data-driven paradigms (7-9). Notably, Graph Artificial Intelligence (GraphAI) offers a unique toolkit for exploring complex network-structured data by integrating knowledge graphs, graph computation, and graph neural networks (GNNs) (10, 11). The core challenges of TCM compatibility, namely complex interactions involving multiple components, targets, and pathways, align closely with GraphAI's strengths in handling intricate relationships (12-14).
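The core operation a GNN layer performs on such component-target-pathway networks is neighborhood aggregation: each node's representation is updated from its neighbors' features. One round of mean aggregation, written without a GNN library so the mechanism is visible; the toy herb-target graph and features are illustrative only.

```python
# Toy herb-target graph: 2-d feature vector per node, undirected edges.
features = {
    "HerbA":   [1.0, 0.0],
    "HerbB":   [0.0, 1.0],
    "TargetX": [0.5, 0.5],
}
edges = {
    "HerbA":   ["TargetX"],
    "HerbB":   ["TargetX"],
    "TargetX": ["HerbA", "HerbB"],
}

def message_pass(feats, adj):
    """h_v <- mean of neighbor features. This is aggregation only; a
    real GNN layer would follow it with a learned linear map and a
    nonlinearity, stacked over several rounds."""
    out = {}
    for node, nbrs in adj.items():
        out[node] = [sum(feats[n][d] for n in nbrs) / len(nbrs) for d in range(2)]
    return out

print(message_pass(features, edges))
```

After a few such rounds, a herb's representation absorbs information from shared targets and pathways, which is exactly the multi-component synergy signal compatibility analysis tries to quantify.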


Traditional Chinese Medicine Case Analysis System for High-Level Semantic Abstraction: Optimized with Prompt and RAG

Xu, Peng, Wu, Hongjin, Wang, Jinle, Lin, Rongjia, Tan, Liwei

arXiv.org Artificial Intelligence

This paper details a technical plan for building a clinical case database for Traditional Chinese Medicine (TCM) using web scraping. Leveraging multiple platforms, including 360doc, we gathered over 5,000 TCM clinical cases, performed data cleaning, and structured the dataset with crucial fields such as patient details, pathogenesis, syndromes, and annotations. Using the Baidu_ERNIE_Speed_128K API, we removed redundant information and generated the final answers through the DeepSeekv2 API, outputting results in standard JSON format. We optimized data recall with RAG and rerank techniques during retrieval and developed a hybrid matching scheme. By combining a two-stage retrieval method with keyword matching via Jieba, we significantly enhanced the accuracy of model outputs.
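The hybrid two-stage idea above, a cheap keyword filter followed by a finer rerank of the survivors, can be sketched as follows. Tokenization here is naive whitespace splitting as a stand-in for Jieba segmentation, and the rerank score is a simple character-overlap ratio rather than the paper's embedding-based reranker; the documents are invented examples.

```python
def keyword_score(query: str, doc: str) -> int:
    # Stage 1: coarse keyword overlap (a stand-in for Jieba-token matching).
    return len(set(query.split()) & set(doc.split()))

def rerank_score(query: str, doc: str) -> float:
    # Stage 2: finer similarity over the shortlist (a stand-in for a
    # learned reranker); here, Jaccard overlap of character sets.
    return len(set(query) & set(doc)) / max(len(set(query) | set(doc)), 1)

def two_stage_retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    shortlist = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)[:k]
    return sorted(shortlist, key=lambda d: rerank_score(query, d), reverse=True)

docs = [
    "qi deficiency fatigue case",
    "liver fire headache case",
    "blood stasis pain case",
]
print(two_stage_retrieve("qi deficiency with fatigue", docs, k=2))
```

The design point is cost: the cheap first stage prunes the candidate pool so the expensive reranker only scores a handful of cases per query.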